Chemical Reaction Networks


Anticipating the Selectivity of Intramolecular Cyclization Reaction Pathways with Neural Network Potentials

Casetti, Nicholas, Anstine, Dylan, Isayev, Olexandr, Coley, Connor W.

arXiv.org Artificial Intelligence

Reaction mechanism search tools have demonstrated the ability to provide insights into likely products and rate-limiting steps of reacting systems. However, reactions involving several concerted bond changes - as can be found in many key steps of natural product synthesis - can complicate the search process. To mitigate these complications, we present a mechanism search strategy particularly suited to help expedite exploration of an exemplary family of such complex reactions, cyclizations. We provide a cost-effective strategy for identifying relevant elementary reaction steps by combining graph-based enumeration schemes and machine learning techniques for intermediate filtering. Key to this approach is our use of a neural network potential (NNP), AIMNet2-rxn, for computational evaluation of each candidate reaction pathway. In this article, we evaluate the NNP's ability to estimate activation energies, demonstrate the correct anticipation of stereoselectivity, and recapitulate complex enabling steps in natural product synthesis.
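The barrier-based filtering step described in the abstract can be sketched as follows. Here `estimate_barrier` is a hypothetical stand-in for an NNP such as AIMNet2-rxn (a toy lookup table so the snippet runs self-contained), and the candidate steps and energies are illustrative, not taken from the paper.

```python
# Hypothetical sketch of filtering enumerated elementary steps by estimated
# activation energy. The SMILES pairs and energies below are toy values.

CANDIDATE_STEPS = {
    # (reactant SMILES, product SMILES): toy activation energy in kcal/mol
    ("C=CC=C.C=C", "C1=CCCCC1"): 18.0,   # Diels-Alder-like cyclization
    ("C=CC=C.C=C", "C1CC1C=C"): 42.0,    # strained alternative
    ("C=CC=C.C=C", "CC=CC=C"): 35.0,     # H-shift alternative
}

def estimate_barrier(reactant, product):
    """Toy stand-in for an NNP barrier estimate (kcal/mol)."""
    return CANDIDATE_STEPS[(reactant, product)]

def filter_steps(steps, threshold=30.0):
    """Keep enumerated steps whose estimated barrier is below threshold."""
    kept = []
    for (r, p) in steps:
        ea = estimate_barrier(r, p)
        if ea <= threshold:
            kept.append((r, p, ea))
    return sorted(kept, key=lambda t: t[2])

viable = filter_steps(list(CANDIDATE_STEPS))
print(viable)  # only the low-barrier cyclization survives the filter
```

In a real workflow the lookup would be replaced by NNP single-point and transition-state evaluations, with the cheap filter pruning the graph-enumerated candidates before expensive refinement.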


Reviewer Comments (e5e63da79fcd2bebbd7cb8bf1c1d0274-Reviews.html)

Neural Information Processing Systems

In total, the paper is meticulous in proposing the framework of chemical reaction networks and mapping belief propagation onto it, but the experiments appear somewhat lacking in real scope. Suggestions: a) more extensive and more convincing experiments with more complicated and/or larger graphs; b) a better theoretical explanation of damped BP in relation to this work; c) a discussion of how reaction speeds can be implemented in reality with different kappas. I expect them to be regulated through chemical compounds, which would most likely lead to discrete subsampling of the speed-space. Would this lead to local minima or other problems during inference? Are the assumptions of the 'perfect chemical reaction network' based on arbitrary species realistic? Where is the catch when graphs get bigger, have larger state spaces, and hundreds or thousands of chemical species are needed to implement a problem?


Message Passing Inference with Chemical Reaction Networks

Nils E. Napp, Ryan P. Adams

Neural Information Processing Systems

Recent work on molecular programming has explored new possibilities for computational abstractions with biomolecules, including logic gates, neural networks, and linear systems. In the future such abstractions might enable nanoscale devices that can sense and control the world at a molecular scale. Just as in macroscale robotics, it is critical that such devices can learn about their environment and reason under uncertainty. At this small scale, systems are typically modeled as chemical reaction networks. In this work, we develop a procedure that can take arbitrary probabilistic graphical models, represented as factor graphs over discrete random variables, and compile them into chemical reaction networks that implement inference. In particular, we show that marginalization based on sum-product message passing can be implemented in terms of reactions between chemical species whose concentrations represent probabilities. We show algebraically that the steady state concentration of these species correspond to the marginal distributions of the random variables in the graph and validate the results in simulations. As with standard sum-product inference, this procedure yields exact results for tree-structured graphs, and approximate solutions for loopy graphs.
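The core idea, concentrations representing probabilities, can be illustrated with a minimal sketch (this is not the paper's full compiler, just a single binary variable with an assumed factor psi = (1.0, 3.0) encoded as rate constants):

```python
# One binary variable X with unnormalized factor weights psi = (1.0, 3.0).
# Species X0 and X1 interconvert: X0 -> X1 at rate psi1, X1 -> X0 at rate
# psi0. The steady-state concentrations recover the normalized marginal
# P(X=1) = 3 / (1 + 3) = 0.75.

psi0, psi1 = 1.0, 3.0
x0, x1 = 1.0, 0.0          # total concentration is conserved at 1
dt = 0.01
for _ in range(10000):     # simple Euler integration to steady state
    flux = psi1 * x0 - psi0 * x1
    x0 -= dt * flux
    x1 += dt * flux

print(round(x1 / (x0 + x1), 3))   # -> 0.75, the marginal P(X=1)
```

The same principle scales up in the paper's construction, where message-passing updates between factors are themselves realized as reactions.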


Integrating Large Language Models For Monte Carlo Simulation of Chemical Reaction Networks

Gyawali, Sadikshya, Mandal, Ashwini, Dahal, Manish, Awale, Manish, Rijal, Sanjay, Adhikari, Shital, Ojha, Vaghawan

arXiv.org Artificial Intelligence

Chemical reaction networks are an important method for modeling and exploring complex biological processes, biochemical interactions, and the behavior of different dynamics in systems biology. However, formulating such reaction kinetics takes considerable time. In this paper, we leverage the efficiency of modern large language models to automate the stochastic Monte Carlo simulation of chemical reaction networks and enable simulation through reaction descriptions provided in natural language. We also integrate this process into the widely used simulation tool COPASI to further ease the work of modelers and researchers. In this work, we show the efficacy and limitations of modern large language models in parsing and creating reaction kinetics for modeling complex chemical reaction processes.
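The stochastic simulation being automated here is typically Gillespie's algorithm. A minimal sketch for the toy network A -> B (species names, rate, and seed are illustrative assumptions, not from the paper):

```python
# Minimal Gillespie stochastic simulation of the single reaction A -> B with
# rate constant k. Each iteration draws an exponential waiting time from the
# current total propensity and fires one reaction event.
import math
import random

def gillespie(a=100, b=0, k=0.5, t_end=50.0, seed=0):
    rng = random.Random(seed)
    t = 0.0
    while t < t_end and a > 0:
        propensity = k * a                          # total reaction propensity
        t += -math.log(rng.random()) / propensity   # exponential waiting time
        a, b = a - 1, b + 1                         # fire A -> B once
    return a, b

print(gillespie())  # by t = 50, essentially all A has converted to B
```

A pipeline like the paper's would generate such kinetics (or the equivalent COPASI model) from a natural-language description such as "A converts to B at rate 0.5".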


Computing Threshold Circuits with Bimolecular Void Reactions in Step Chemical Reaction Networks

Anderson, Rachel, Fu, Bin, Massie, Aiden, Mukhopadhyay, Gourab, Salinas, Adrian, Schweller, Robert, Tomai, Evan, Wylie, Tim

arXiv.org Artificial Intelligence

Step Chemical Reaction Networks (step CRNs) are an augmentation of the Chemical Reaction Network (CRN) model where additional species may be introduced to the system in a sequence of ``steps.'' We study step CRN systems using a weak subset of reaction rules, \emph{void} rules, in which molecular species can only be deleted. We demonstrate that step CRNs with only void rules of size (2,0) can simulate threshold formulas (TFs) under linear resources. These limited systems can also simulate threshold \emph{circuits} (TCs) by modifying the volume of the system to be exponential. We then prove a matching exponential lower bound on the required volume for simulating threshold circuits in a step CRN with (2,0)-size rules under a restricted \emph{gate-wise} simulation, thus showing our construction is optimal for simulating circuits in this way.
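A (2,0) void rule consumes two molecules and produces nothing. A toy multiset simulation (illustrative only; the paper's threshold-formula constructions are more involved) shows how pairwise deletion computes a simple majority threshold, with assumed species names Y and N:

```python
# Toy simulation of a (2,0) void rule: (Y, N) -> nothing. Y and N counts
# annihilate pairwise until one species is exhausted; the survivor reveals
# which side held the majority.
from collections import Counter

def apply_void_rule(state, rule):
    """Repeatedly apply a (2,0) rule (a, b) -> nothing while both are present."""
    a, b = rule
    fired = min(state[a], state[b])   # number of times the rule can fire
    state[a] -= fired
    state[b] -= fired
    return state

state = Counter({"Y": 7, "N": 4})
apply_void_rule(state, ("Y", "N"))
print(state["Y"], state["N"])  # -> 3 0 : Y held the majority
```

The "step" aspect of the model, introducing fresh species between phases, is what lets such deletion-only rules be chained into full threshold formulas and circuits.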


Autonomous Learning of Generative Models with Chemical Reaction Network Ensembles

Poole, William, Ouldridge, Thomas E., Gopalkrishnan, Manoj

arXiv.org Artificial Intelligence

Can a micron-sized sack of interacting molecules autonomously learn an internal model of a complex and fluctuating environment? We draw insights from control theory, machine learning theory, chemical reaction network theory, and statistical physics to develop a general architecture whereby a broad class of chemical systems can autonomously learn complex distributions. Our construction takes the form of a chemical implementation of machine learning's optimization workhorse: gradient descent on the relative entropy cost function. We show how this method can be applied to optimize any detailed balanced chemical reaction network and that the construction is capable of using hidden units to learn complex distributions. This result is then recast as a form of integral feedback control. Finally, due to our use of an explicit physical model of learning, we are able to derive the thermodynamic costs and trade-offs associated with this process.
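The optimization target, gradient descent on relative entropy, can be sketched in the simplest possible setting (one binary unit; this is an abstract illustration of the cost function, not the paper's chemical implementation). With the model q_theta(1) = sigmoid(theta), the gradient of D(p_target || q_theta) works out to q1 - p1:

```python
# Gradient descent on the relative entropy D(p_target || q_theta) for a
# single binary unit with q_theta(1) = sigmoid(theta). The gradient with
# respect to theta simplifies to (q1 - p1).
import math

p1 = 0.8                       # target probability of state 1
theta, lr = 0.0, 0.5
for _ in range(2000):
    q1 = 1.0 / (1.0 + math.exp(-theta))
    theta -= lr * (q1 - p1)    # descend the KL gradient

q1 = 1.0 / (1.0 + math.exp(-theta))
print(round(q1, 3))            # -> 0.8: the model has matched the target
```

In the paper this descent is realized by reaction fluxes in a detailed balanced network rather than by an explicit parameter update.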


Information geometric bound on general chemical reaction networks

Mizohata, Tsuyoshi, Kobayashi, Tetsuya J., Bouchard, Louis-S., Miyahara, Hideyuki

arXiv.org Machine Learning

We investigate the dynamics of chemical reaction networks (CRNs) with the goal of deriving an upper bound on their reaction rates. This task is challenging due to the nonlinear nature and discrete structure inherent in CRNs. To address this, we employ an information geometric approach, using the natural gradient, to develop a nonlinear system that yields an upper bound for CRN dynamics. We validate our approach through numerical simulations, demonstrating faster convergence in a specific class of CRNs. This class is characterized by the number of chemicals, the maximum value of stoichiometric coefficients of the chemical reactions, and the number of reactions. We also compare our method to a conventional approach, showing that the latter cannot provide an upper bound on reaction rates of CRNs. While our study focuses on CRNs, the ubiquity of hypergraphs in fields from natural sciences to engineering suggests that our method may find broader applications, including in information science.
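The natural-gradient idea used here can be illustrated in a minimal setting that is only loosely related to the paper's CRN bound: for a Bernoulli mean parameter p minimizing D(p* || p), the Fisher metric F(p) = 1/(p(1-p)) exactly cancels the curvature of the ordinary KL gradient, leaving a simple linear update. The target value and step size below are assumptions for the demo:

```python
# Natural-gradient descent for a Bernoulli parameter p minimizing the
# relative entropy D(p_star || p). Preconditioning the ordinary gradient
# (p - p_star) / (p(1-p)) by the inverse Fisher information p(1-p) yields
# the curvature-free update p <- p - lr * (p - p_star).

p_star = 0.9                 # target parameter
p, lr = 0.1, 0.2
for _ in range(200):
    grad = (p - p_star) / (p * (1 - p))   # ordinary KL gradient
    p -= lr * (p * (1 - p)) * grad        # natural-gradient step

print(round(p, 3))           # -> 0.9
```

The uniform geometric convergence, independent of where p sits in (0, 1), is the kind of metric-induced behavior the information-geometric treatment of CRN dynamics exploits.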
